
    Deep Learning for Embedding and Integrating Multimodal Biomedical Data

    Biomedical data are being generated at extremely high throughput and dimensionality by technologies ranging from single-cell genomics, proteomics, and transcriptomics (cytometry, single-cell RNA and ATAC sequencing) to neuroscience and cognition (fMRI and PET) to pharmaceuticals (drug perturbations and interactions). These new and emerging technologies, and the datasets they create, give an unprecedented view into the workings of their respective biological systems. However, there is a large gap between the information contained in these datasets and the insights that current machine learning methods can extract from them. This is especially true when multiple technologies measure the same underlying biological entity or system: if each modality is analyzed separately, patterns that emerge only from the joint representation of all modalities together go unobserved. Through an interdisciplinary approach that emphasizes active collaboration with domain experts, my research has developed models for data integration, extracting important insights through the joint analysis of varied data sources. In this thesis, I discuss models that address this task of multimodal data integration, especially generative adversarial networks (GANs) and autoencoders (AEs). My research has focused on using both of these models generatively for concrete problems in cutting-edge scientific applications, rather than exclusively for generating high-resolution natural images. The research in this thesis is united by the idea of building models that extract new knowledge from scientific data that is inaccessible to existing methods.
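    As a minimal sketch of the data-integration idea described in this abstract (not the thesis's actual architecture), the snippet below aligns two modality-specific autoencoders in a shared latent space so that paired measurements of the same cells embed together. All class names, dimensions, and the alignment term are illustrative assumptions.

```python
# Sketch only: two modality-specific autoencoders with an aligned latent space.
# Dimensions and the alignment loss are illustrative assumptions, not the
# thesis's actual model.
import torch
import torch.nn as nn

class ModalityAE(nn.Module):
    """Encoder/decoder pair for one data modality."""
    def __init__(self, input_dim, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, input_dim),
        )

    def forward(self, x):
        z = self.encoder(x)
        return z, self.decoder(z)

def integration_loss(ae_a, ae_b, x_a, x_b):
    """Per-modality reconstruction plus alignment of the shared latents."""
    z_a, rec_a = ae_a(x_a)
    z_b, rec_b = ae_b(x_b)
    mse = nn.functional.mse_loss
    recon = mse(rec_a, x_a) + mse(rec_b, x_b)
    align = mse(z_a, z_b)  # pull paired measurements together in latent space
    return recon + align

# Usage: paired measurements of the same cells from two technologies
# (e.g. RNA and ATAC); sizes are placeholders.
ae_rna, ae_atac = ModalityAE(2000), ModalityAE(500)
x_rna, x_atac = torch.randn(64, 2000), torch.randn(64, 500)
loss = integration_loss(ae_rna, ae_atac, x_rna, x_atac)
loss.backward()
```

    The alignment term is the simplest possible choice; an adversarial critic on the latent space, as a GAN-based variant would use, could serve the same purpose.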

    ISS emergency scenarios and a virtual training simulator for Flight Controllers

    The current emergency response concept for the International Space Station (ISS) includes the support of the Flight Control Team. The team members therefore need to be trained in the emergency scenarios and the corresponding crew procedures to ensure smooth collaboration between crew and ground. When astronaut training and ground personnel training are not collocated, ensuring and maintaining proper knowledge and skills in the Flight Control Team is a challenging endeavor. To address this, a virtual 3D simulator at the Columbus Control Center (Col-CC) is presented, which is used to train ground personnel in the on-board emergency response. The paper briefly introduces the main ISS emergency scenarios and the corresponding response strategy, details the resulting learning objectives for the Flight Controllers, and elaborates on the new simulation method, which will be used in the future. The status of the 3D simulator, first experiences, and further plans are discussed.

    CUTS: A Fully Unsupervised Framework for Medical Image Segmentation

    In this work we introduce CUTS (Contrastive and Unsupervised Training for Segmentation), the first fully unsupervised deep learning framework for medical image segmentation, facilitating the use of the vast majority of imaging data that is not labeled or annotated. Segmenting medical images into regions of interest is a critical task for facilitating both patient diagnosis and quantitative research. A major limiting factor is the lack of labeled data: obtaining expert annotations for each new imaging dataset or task can be expensive, labor intensive, and inconsistent across annotators. We therefore utilize self-supervision based on pixel-centered patches drawn from the images themselves. Our unsupervised approach is based on a training objective with both contrastive learning and autoencoding aspects. Previous contrastive learning approaches for medical image segmentation have either focused on image-level rather than intra-image patch-level contrastive training, or have used contrastive learning only as a pre-training task after which the network still required supervised training. By contrast, we build the first entirely unsupervised framework operating at the pixel-centered-patch level: we add novel augmentations and a patch reconstruction loss, and introduce a new pixel clustering and identification framework. Our model achieves improved results on several key medical imaging tasks, as verified by held-out expert annotations on the task of segmenting geographic atrophy (GA) regions in retinal images.
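    To make the combined objective concrete, here is a minimal sketch of patch-level contrastive learning plus a patch reconstruction loss, in the spirit of the description above. The paper's actual augmentations, loss weighting, and pixel clustering step are not reproduced; every name, patch size, and the noise augmentation are assumptions.

```python
# Sketch only: contrastive + autoencoding objective over pixel-centered
# patches. Hyperparameters and the augmentation are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchEmbedder(nn.Module):
    """Embed a pixel-centered patch and reconstruct it from the embedding."""
    def __init__(self, patch_size=7, dim=64):
        super().__init__()
        n = patch_size * patch_size
        self.encoder = nn.Sequential(
            nn.Flatten(),
            nn.Linear(n, dim), nn.ReLU(),
            nn.Linear(dim, dim),
        )
        self.decoder = nn.Linear(dim, n)

    def forward(self, patches):  # patches: (B, patch_size, patch_size)
        z = self.encoder(patches)
        rec = self.decoder(z).view_as(patches)
        return z, rec

def info_nce(z1, z2, temperature=0.1):
    """Contrastive loss: patch i in view 1 should match patch i in view 2."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature
    targets = torch.arange(z1.size(0))
    return F.cross_entropy(logits, targets)

model = PatchEmbedder()
patches = torch.rand(32, 7, 7)                           # pixel-centered patches
augmented = patches + 0.05 * torch.randn_like(patches)   # stand-in augmentation
z1, rec = model(patches)
z2, _ = model(augmented)
loss = info_nce(z1, z2) + F.mse_loss(rec, patches)       # contrastive + autoencoding
loss.backward()
```

    In a full pipeline, the learned per-pixel embeddings would then be clustered to produce segmentation maps; that step is omitted here.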